

Search for: All records

Creators/Authors contains: "Narasimhan, Srinivasa G."


  1. Reconstructing 4D vehicular activity (3D space and time) from cameras helps autonomous vehicles, commuters, and local authorities plan for smarter and safer cities. Traffic is inherently repetitious over long periods, yet current deep learning-based 3D reconstruction methods have not exploited such repetition and have difficulty generalizing to newly installed intersection cameras. We present a novel approach that exploits longitudinal (long-term) repetitious motion as self-supervision to reconstruct 3D vehicular activity from video captured by a single fixed camera. Starting from off-the-shelf 2D keypoint detections, our algorithm optimizes 3D vehicle shapes and poses, and then clusters their trajectories in 3D space. The 2D keypoints and trajectory clusters accumulated over the long term are then used to improve the 2D and 3D keypoints via self-supervision, without any human annotation. Our method improves reconstruction accuracy over the state of the art on scenes that differ significantly in appearance from the keypoint detector's training data, and has many applications, including velocity estimation, anomaly detection, and vehicle counting. We demonstrate results on traffic videos captured at multiple city intersections, collected with our smartphones, from YouTube, and from other public datasets. (A minimal sketch of this pipeline appears after the list.)
  2. Active sensing with adaptive depth sensors is a nascent field, with potential in areas such as advanced driver-assistance systems (ADAS). These sensors, however, require dynamically driving a laser or light source to specific locations to capture information; one such class of sensor is the triangulation light curtain (LC). In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a light curtain's laser line to regions of uncertainty and obtain new measurements. These measurements are used to reduce depth uncertainty and correct errors recursively. We show real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a light curtain. (A minimal sketch of the uncertainty-driven placement loop appears after the list.)
  3. A conventional optical lens can be used to focus light into a target medium from outside, without disturbing the medium. The focused spot size is proportional to the focal distance, resulting in a tradeoff between penetration depth into the target medium and spatial resolution. We have shown that virtual ultrasonically sculpted gradient-index (GRIN) optical waveguides can be formed in the target medium to steer light without disturbing it. Here, we demonstrate that such virtual waveguides can relay an externally focused Gaussian beam of light through the medium, beyond the focal distance of a single external physical lens, to extend the penetration depth without compromising the spot size. Moreover, the spot size can be tuned by reconfiguring the virtual waveguide. We show that these virtual GRIN waveguides can be formed in transparent and turbid media to enhance the confinement and contrast ratio of the focused beam of light at the target location. This method can be extended to realize complex optical systems of external physical lenses and in situ virtual waveguides, extending the reach and flexibility of optical methods. (The underlying focusing tradeoff is made explicit in the worked equation after the list.)
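
A minimal, self-contained sketch of the longitudinal self-supervision idea in item 1, with trajectories reduced to 2D ground-plane points so the example runs as-is. The greedy clustering and all names (cluster_trajectories, refine_with_clusters, blend) are illustrative assumptions, not the authors' implementation, which optimizes full 3D vehicle shapes and poses and improves a keypoint detector.

```python
# Toy version of: detect -> reconstruct -> cluster -> self-supervise.
# All names and the greedy clustering are hypothetical stand-ins.
import numpy as np

def cluster_trajectories(trajs, dist_thresh=2.0):
    """Greedily group fixed-length trajectories whose mean pointwise
    distance to a cluster mean falls below dist_thresh (meters, assumed)."""
    clusters = []
    for t in trajs:
        for c in clusters:
            if np.mean(np.linalg.norm(t - c["mean"], axis=1)) < dist_thresh:
                c["members"].append(t)
                c["mean"] = np.mean(c["members"], axis=0)
                break
        else:
            clusters.append({"members": [t], "mean": t.copy()})
    return clusters

def refine_with_clusters(trajs, clusters, blend=0.5):
    """Self-supervision step: pull each noisy trajectory toward its nearest
    cluster mean, standing in for retraining the keypoint detector on
    pseudo-labels accumulated over the long term."""
    refined = []
    for t in trajs:
        nearest = min(clusters,
                      key=lambda c: np.mean(np.linalg.norm(t - c["mean"], axis=1)))
        refined.append((1 - blend) * t + blend * nearest["mean"])
    return refined

# Toy data: two repeated traffic lanes observed many times, each trajectory
# a sequence of 10 (x, y) ground-plane points plus detection noise.
rng = np.random.default_rng(0)
lane_a = np.stack([np.linspace(0, 20, 10), np.zeros(10)], axis=1)
lane_b = np.stack([np.zeros(10), np.linspace(0, 20, 10)], axis=1)
trajs = [lane + rng.normal(0, 0.5, lane.shape)
         for lane in (lane_a, lane_b) for _ in range(50)]

clusters = cluster_trajectories(trajs)
refined = refine_with_clusters(trajs, clusters)
print(f"{len(clusters)} clusters recovered from {len(trajs)} trajectories")
```

The refinement here simply blends each noisy trajectory toward its nearest cluster mean; in the paper's setting, the accumulated clusters instead supply supervision for improving the 2D and 3D keypoints.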
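For item 2, a toy version of driving a measurement to the region of highest uncertainty and fusing it with a recursive (Kalman-style) update, reduced to one row of pixels. The measurement model and the names (place_curtain, meas_var) are assumptions for illustration, not the actual light-curtain planner.

```python
# Uncertainty-driven sensing loop, simulated. Hypothetical sketch only.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 8
true_depth = rng.uniform(5.0, 30.0, n_pixels)      # ground truth (m)
mean = true_depth + rng.normal(0, 3.0, n_pixels)   # noisy RGB depth prior
var = np.full(n_pixels, 9.0)                       # prior variance (m^2)
meas_var = 0.25                                    # curtain noise (m^2)

def place_curtain(mean, var):
    """Drive the laser line to the current depth estimate of the most
    uncertain pixel (a stand-in for planning a full curtain profile)."""
    i = int(np.argmax(var))
    return i, mean[i]

for step in range(20):
    i, planned_depth = place_curtain(mean, var)
    # Simulated curtain return; for simplicity it reports the true depth
    # at the targeted pixel regardless of placement error.
    z = true_depth[i] + rng.normal(0, np.sqrt(meas_var))
    # Recursive (Kalman-style) update shrinks uncertainty at pixel i.
    k = var[i] / (var[i] + meas_var)
    mean[i] += k * (z - mean[i])
    var[i] *= (1 - k)

print("depth RMSE after refinement:",
      np.sqrt(np.mean((mean - true_depth) ** 2)))
```

Repeatedly targeting the highest-variance pixel is one simple policy; the point is only that each measurement both corrects the estimate and reduces the uncertainty that drives the next placement.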
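The depth/resolution tradeoff stated in item 3 follows from standard paraxial Gaussian-beam optics; this textbook relation is given for context and is not quoted from the paper:

```latex
% Focusing a collimated Gaussian beam of waist w_0 with a lens of focal
% length f at wavelength \lambda (valid when f is much less than the
% Rayleigh range of the input beam):
\[
  w_f \approx \frac{\lambda f}{\pi w_0}
  \qquad\Longrightarrow\qquad
  w_f \propto f \quad \text{for fixed } \lambda \text{ and } w_0 .
\]
```

Since the focused waist grows linearly with focal distance at fixed wavelength and input beam size, focusing deeper with a single external lens necessarily enlarges the spot; relaying the focus through an in situ virtual waveguide is what allows the reach to extend beyond f without growing w_f.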